Voice conversion based on variational autoencoders (VAE-VC) has the advantage of requiring only pairs of speech and speaker labels for training. Unlike most research in VAE-VC, which focuses on utilizing auxiliary losses or discrete variables, this study investigates how and to what extent increasing model expressiveness benefits VAE-VC. Specifically, we first analyze VAE-VC from a rate-distortion perspective and point out that model expressiveness is significant for VAE-VC because rate and distortion reflect the similarity and naturalness of converted speech. Based on this analysis, we propose a novel VC method using a deep hierarchical VAE, which has high model expressiveness as well as fast conversion speed thanks to its non-autoregressive decoder. Furthermore, our analysis reveals another problem: similarity can degrade when the latent variable of the VAE carries redundant information. We address this problem by controlling the information contained in the latent variable using the $\beta$-VAE objective. In experiments on the VCTK corpus, the proposed method achieved mean opinion scores higher than 3.5 on both naturalness and similarity in inter-gender settings, which are higher than those of existing autoencoder-based VC methods.
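For context, the $\beta$-VAE objective referred to above takes the following standard form (from the $\beta$-VAE literature, not specific to this paper):

$$\mathcal{L} = \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] - \beta\, D_{\mathrm{KL}}\!\left(q_\phi(z|x)\,\|\,p(z)\right)$$

In the rate-distortion view, the KL term is the rate (information carried by the latent variable $z$) and the reconstruction term is the distortion; setting $\beta > 1$ penalizes the rate more heavily, which is how redundant information in $z$ can be suppressed.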
Deep image prior (DIP) has recently attracted attention owing to its use in unsupervised positron emission tomography (PET) image reconstruction, which does not require any prior training dataset. In this paper, we present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method that incorporates a forward-projection model into a loss function. To implement a practical fully 3D PET image reconstruction, which could not previously be performed due to graphics processing unit memory limitations, we modify the DIP optimization to a block-iteration scheme and sequentially learn an ordered sequence of block sinograms. Furthermore, a relative difference penalty (RDP) term is added to the loss function to enhance the quantitative accuracy of the PET images. We evaluated our proposed method using a Monte Carlo simulation with [$^{18}$F]FDG PET data of a human brain and a preclinical study on monkey-brain [$^{18}$F]FDG PET data. The proposed method was compared with maximum-likelihood expectation maximization (EM), maximum a posteriori EM with RDP, and hybrid DIP-based PET reconstruction methods. The simulation results showed that the proposed method improved the PET image quality by reducing statistical noise while preserving the contrast of brain structures and an inserted tumor compared with the other algorithms. In the preclinical experiment, finer structures and better contrast recovery were obtained with the proposed method. This indicates that the proposed method can produce high-quality images without a prior training dataset. Thus, the proposed method is a key enabling technology for the straightforward and practical implementation of end-to-end DIP-based fully 3D PET image reconstruction.
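For reference, the relative difference penalty mentioned above has a standard form in the PET reconstruction literature (the exact weighting used in this paper is not given in the abstract; $N_j$ denotes the neighborhood of voxel $j$ and $\gamma$ controls edge preservation):

$$R(f) = \sum_j \sum_{k \in N_j} \frac{(f_j - f_k)^2}{f_j + f_k + \gamma\,|f_j - f_k|}$$

Schematically, the DIP loss with a forward-projection model $P$ and network output $f_\theta(z)$ then takes the form $L(\theta) = \|y - P f_\theta(z)\|^2 + \beta\, R(f_\theta(z))$, where $y$ is the measured sinogram and $\beta$ a regularization weight; this is a sketch of the structure described in the abstract, not the paper's exact objective.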
Color is a critical design element for web pages, influencing viewer emotions as well as the overall trust in and satisfaction with a website. Effective coloring requires design knowledge and expertise, but if this process could be automated through data-driven modeling, efficient exploration and alternative workflows would become possible. However, this direction remains underexplored due to the lack of a formalization of the web page colorization problem, datasets, and evaluation protocols. In this work, we propose a new dataset consisting of e-commerce mobile web pages in a tractable format, created by simplifying the pages and extracting canonical color styles with a common web browser. The web page colorization problem is then formalized as the task of estimating plausible color styles for a given web page content with a given hierarchical structure of elements. We present several Transformer-based methods adapted to this task by prepending structural message passing to capture hierarchical relationships between elements. Experimental results, including a quantitative evaluation designed for this task, demonstrate the advantages of our methods over statistical and image colorization methods. The code is available at https://github.com/CyberAgentAILab/webcolor.
Edema is a common symptom of kidney disease, and quantitative measurement of edema is desired. This paper presents a method to estimate the degree of edema from facial images taken before and after dialysis of renal failure patients. As tasks for estimating the degree of edema, we perform pre-/post-dialysis classification and body weight prediction. We develop a multi-patient pre-training framework for acquiring knowledge of edema and transfer the pre-trained model to a model for each patient. For effective pre-training, we propose a novel contrastive representation learning method, called weight-aware supervised momentum contrast (WeightSupMoCo). WeightSupMoCo aims to make feature representations of facial images closer in proportion to the similarity of patient weights when the pre- and post-dialysis labels are the same. Experimental results show that our pre-training approach improves the accuracy of pre-/post-dialysis classification by 15.1% and reduces the mean absolute error of weight prediction by 0.243 kg compared with training from scratch. The proposed method accurately estimates the degree of edema from facial images; our edema estimation system could thus be beneficial to dialysis patients.
Peripheral blood oxygen saturation (SpO2), an indicator of oxygen levels in the blood, is one of the most important physiological parameters. Although SpO2 is usually measured using a pulse oximeter, non-contact SpO2 estimation methods from facial or hand videos have been attracting attention in recent years. In this paper, we propose an SpO2 estimation method from facial videos based on convolutional neural networks (CNNs). Our method constructs CNN models that consider the direct current (DC) and alternating current (AC) components extracted from the RGB signals of facial videos, which are central to the principle of SpO2 estimation. Specifically, we extract the DC and AC components from the spatio-temporal map using filtering processes and train CNN models to predict SpO2 from these components. We also propose an end-to-end model that predicts SpO2 directly from the spatio-temporal map by extracting the DC and AC components via convolutional layers. Experiments using facial videos and SpO2 data from 50 subjects demonstrate that the proposed method achieves a better estimation performance than current state-of-the-art SpO2 estimation methods.
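A minimal sketch of the classical DC/AC principle the CNN models build on, the "ratio of ratios" feature; the moving-average filtering choice, the channel pairing, and the linear calibration constants here are illustrative assumptions, not the paper's method:

```python
import numpy as np

def dc_ac_components(signal, fs, win_sec=2.0):
    """Split a pulse signal into a DC part (slow baseline, via moving average)
    and an AC part (pulsatile residual around that baseline)."""
    win = max(1, int(fs * win_sec))
    pad = win // 2
    padded = np.pad(signal, pad, mode="edge")  # edge-pad to limit boundary bias
    kernel = np.ones(win) / win
    dc = np.convolve(padded, kernel, mode="same")[pad:pad + len(signal)]
    ac = signal - dc
    return dc, ac

def ratio_of_ratios(chan_a, chan_b, fs):
    """Classical 'ratio of ratios' feature: AC amplitude normalized by DC level
    of one color channel, divided by the same quantity for another channel."""
    dc_a, ac_a = dc_ac_components(chan_a, fs)
    dc_b, ac_b = dc_ac_components(chan_b, fs)
    return (ac_a.std() / dc_a.mean()) / (ac_b.std() / dc_b.mean())

def spo2_linear(r, a=100.0, b=5.0):
    """Illustrative linear calibration SpO2 ~ a - b * R; a and b are
    placeholder constants, normally fitted against reference oximeter data."""
    return a - b * r
```

A CNN-based approach such as the one in the abstract learns the mapping from these (or raw spatio-temporal) features to SpO2 instead of relying on a fixed linear calibration.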
Hyperparameter optimization (HPO) is essential for the better performance of deep learning, and practitioners often need to consider the trade-off between multiple metrics, such as error rate, latency, memory requirements, robustness, and algorithmic fairness. Due to this demand and the heavy computation of deep learning, the acceleration of multi-objective (MO) optimization becomes ever more important. Although meta-learning has been extensively studied to speed up HPO, existing methods are not applicable to the MO tree-structured Parzen estimator (MO-TPE), a simple yet powerful MO-HPO algorithm. In this paper, we extend TPE's acquisition function to the meta-learning setting, using a task similarity defined by the overlap in promising domains of each task. In a comprehensive set of experiments, we demonstrate that our method accelerates MO-TPE on tabular HPO benchmarks and yields state-of-the-art performance. Our method was also validated externally by winning the AutoML 2022 competition on "Multiobjective Hyperparameter Optimization for Transformers".
Mobile stereo-matching systems have become an important part of many applications, such as automated-driving vehicles and autonomous robots. Accurate stereo-matching methods usually entail high computational complexity; however, mobile platforms have only limited hardware resources for keeping their power consumption low, which makes it difficult to maintain both an acceptable processing speed and accuracy on mobile platforms. To resolve this trade-off, we herein propose a novel acceleration approach for the well-known zero-mean normalized cross-correlation (ZNCC) matching cost calculation algorithm on a Jetson TX2 embedded GPU. In our method for accelerating ZNCC, target images are scanned in a zigzag fashion to efficiently reuse one pixel's computation for its neighboring pixels; this reduces the amount of data transmission and increases the utilization of on-chip registers, thus increasing the processing speed. As a result, our method is 2x faster than the traditional image scanning method and 26% faster than the latest NCC method. By combining this technique with the domain transformation (DT) algorithm, our system achieves a real-time processing speed of 32 fps on a Jetson TX2 GPU for 1,280x384-pixel images with a maximum disparity of 128. Additionally, evaluation results on the KITTI 2015 benchmark show that our combined system is 7.26% more accurate than the same algorithm combined with the census transform, while maintaining almost the same processing speed.
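For reference, the ZNCC matching cost itself (independent of the zigzag-scanning GPU optimization described above) is the normalized correlation of two mean-subtracted patches; a minimal sketch with illustrative function names:

```python
import numpy as np

def zncc(patch_a, patch_b, eps=1e-8):
    """Zero-mean normalized cross-correlation between two equal-size patches.

    Subtracting each patch's mean and normalizing by the standard deviations
    makes the score invariant to affine brightness changes; the result lies
    in [-1, 1], where 1 indicates a perfect match.
    """
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    denom = np.sqrt((a * a).sum()) * np.sqrt((b * b).sum())
    return float((a * b).sum() / (denom + eps))
```

In a stereo matcher this score is evaluated for every candidate disparity along the epipolar line, and the brightness invariance is what makes ZNCC robust to exposure differences between the left and right cameras.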
During the operation of a chemical plant, product quality must be maintained at all times, and the production of off-specification products should be minimized. Therefore, process variables related to product quality, such as the temperature and composition of materials in various parts of the plant, must be measured, and appropriate operations (i.e., control) must be performed based on the measured values. Some process variables, such as temperature and flow rate, can be measured continuously and instantaneously. Other variables, such as composition and viscosity, however, can only be obtained through time-consuming analysis after sampling substances from the plant. Soft sensors have been proposed for estimating such process variables in real time from easy-to-measure variables. However, the estimation accuracy of conventional statistical soft sensors, which are constructed from recorded measurements, can be very poor in unrecorded situations (extrapolation). In this study, we estimate the internal state variables of a plant by using a dynamic simulator that can estimate and predict even unrecorded situations on the basis of chemical engineering knowledge and an artificial intelligence (AI) technique called reinforcement learning, and propose using the estimated internal state variables of the plant as soft sensors. In addition, we describe the prospects for plant operation and control using such soft sensors and the methodology for obtaining the necessary predictive models (i.e., simulators) for the proposed system.
We introduce 2D blind spot estimation as a critical visual task for road scene understanding. By automatically detecting road regions that are occluded from the vehicle's vantage point, we can proactively alert a manual driver or a self-driving system to potential causes of accidents (e.g., draw attention to a road region from which a child may dart out). Detecting blind spots in full 3D would be challenging, as 3D reasoning would be prohibitively expensive and error-prone even with a LiDAR-equipped vehicle. Instead, we propose to learn to estimate blind spots in 2D just from a monocular camera. We achieve this in two steps. We first introduce an automatic method for generating ``ground-truth'' blind spot training data for arbitrary driving videos by leveraging monocular depth estimation, semantic segmentation, and SLAM. The key idea is to reason in 3D but from 2D images, by defining blind spots as those road regions that are currently invisible but become visible in the near future. We construct a large-scale dataset with this automatic offline blind spot estimation, which we refer to as the Road Blind Spot (RBS) dataset. Next, we introduce BlindSpotNet (BSN), a simple network that fully leverages this dataset for fully automatic estimation of frame-wise blind spot probability maps for arbitrary driving videos. Extensive experimental results demonstrate the validity of our RBS dataset and the effectiveness of our BSN.
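The key labeling idea, road regions that are invisible now but become visible in the near future, reduces to a simple mask operation once per-frame road-visibility masks are available; a minimal sketch, where the boolean masks and the function name are illustrative placeholders for the depth/segmentation/SLAM pipeline:

```python
import numpy as np

def blind_spot_label(current_visible_road, future_visible_road):
    """Label as blind spot every road pixel that is not visible in the
    current frame but is visible in some near-future frame.

    Both inputs are boolean masks over the same image grid: True where the
    pixel is confirmed road surface visible in that frame (or frame set).
    """
    return future_visible_road & ~current_visible_road
```

In the actual pipeline, `future_visible_road` would be obtained by aggregating visible road regions from later frames and warping them back into the current view using the SLAM camera poses and estimated depth.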
Repetition is a response that repeats words from the previous speaker's utterance in a dialogue. As described in linguistic studies, repetition is essential for building trust with others. In this work, we focus on repetition generation. To the best of our knowledge, this is the first neural approach to address repetition generation. We propose Weighted Label Smoothing, a smoothing method for explicitly learning which words to repeat during fine-tuning, and a repetition scoring method that can output more suitable repetitions during decoding. We conducted automatic and human evaluations in which these methods were applied to the pre-trained language model T5 to generate repetitions. The experimental results show that our methods outperformed baselines in both evaluations.
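The abstract does not give the exact form of Weighted Label Smoothing; one plausible reading, sketched here purely as an illustration (the `repeat_weight` mass split and the function signature are hypothetical, not the paper's definition), redistributes part of the smoothing mass onto candidate repeat tokens instead of spreading it uniformly:

```python
import numpy as np

def weighted_label_smoothing(target_id, vocab_size, repeat_ids,
                             eps=0.1, repeat_weight=0.8):
    """Hypothetical sketch: smoothed target distribution for one position.

    The gold token keeps 1 - eps of the mass; a fraction `repeat_weight` of
    the smoothing mass eps is shared among candidate repeat tokens (words
    from the previous utterance), and the remainder is spread uniformly.
    """
    dist = np.zeros(vocab_size)
    dist[target_id] = 1.0 - eps
    repeat_ids = [i for i in repeat_ids if i != target_id]
    if repeat_ids:
        dist[repeat_ids] += eps * repeat_weight / len(repeat_ids)
        rest = eps * (1.0 - repeat_weight)
    else:
        rest = eps
    dist += rest / vocab_size  # uniform share for the whole vocabulary
    return dist
```

Training would then minimize the cross-entropy between the model's predicted distribution and this smoothed target, biasing the model toward emitting repeat candidates.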